Dot Product
In mathematics, the dot product or scalar product (the term scalar product means literally "product with a scalar as a result"; it is also used for other symmetric bilinear forms, for example in a pseudo-Euclidean space, and is not to be confused with scalar multiplication) is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). It should not be confused with the cross product.

Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths).

The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the cross product in three-dimensional space).


Definition
The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space.

In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space \mathbf{R}^n. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non-oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.


Coordinate definition
The dot product of two vectors \mathbf{a} = [a_1, a_2, \cdots, a_n] and \mathbf{b} = [b_1, b_2, \cdots, b_n], specified with respect to an orthonormal basis, is defined as:
\mathbf a \cdot \mathbf b = \sum_{i=1}^n a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n where \Sigma denotes summation and n is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors [1, 3, -5] and [4, -2, -1] is: \begin{align} \ [1, 3, -5] \cdot [4, -2, -1] &= (1 \times 4) + (3\times-2) + (-5\times-1) \\ &= 4 - 6 + 5 \\ &= 3 \end{align}

Likewise, the dot product of the vector [1, 3, -5] with itself is: \begin{align} \ [1, 3, -5] \cdot [1, 3, -5] &= (1 \times 1) + (3\times 3) + (-5\times -5) \\ &= 1 + 9 + 25 \\ &= 35 \end{align}
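The algebraic definition translates directly into code. Here is a minimal Python sketch (the helper name `dot` is ours, not a standard library function):

```python
def dot(a, b):
    """Sum of the products of corresponding entries (algebraic definition)."""
    if len(a) != len(b):
        raise ValueError("vectors must have equal length")
    return sum(x * y for x, y in zip(a, b))

# The worked examples from the text:
print(dot([1, 3, -5], [4, -2, -1]))  # 3
print(dot([1, 3, -5], [1, 3, -5]))   # 35
```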

If vectors are identified with column vectors, the dot product can also be written as a matrix product \mathbf a \cdot \mathbf b = \mathbf a^{\mathsf T} \mathbf b, where \mathbf a^{\mathsf T} denotes the transpose of \mathbf a.

Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry: \begin{bmatrix} 1 & 3 & -5 \end{bmatrix} \begin{bmatrix} 4 \\ -2 \\ -1 \end{bmatrix} = 3 \, .
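The same computation can be phrased as a matrix product. A small sketch with a naive matrix multiply (the helper name `matmul` is illustrative, not a library function):

```python
def matmul(A, B):
    """Naive matrix product of A (m x n) and B (n x p), stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

row = [[1, 3, -5]]        # a^T, a 1 x 3 matrix
col = [[4], [-2], [-1]]   # b, a 3 x 1 matrix
print(matmul(row, col))   # [[3]], a 1 x 1 matrix identified with the scalar 3
```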


Geometric definition
In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector \mathbf{a} is denoted by \left\| \mathbf{a} \right\| . The dot product of two Euclidean vectors \mathbf{a} and \mathbf{b} is defined by
\mathbf{a}\cdot\mathbf{b}= \left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|\cos\theta , where \theta is the angle between \mathbf{a} and \mathbf{b}.

In particular, if the vectors \mathbf{a} and \mathbf{b} are orthogonal (i.e., their angle is \frac{\pi}{2} or 90^\circ), then \cos \frac \pi 2 = 0, which implies that \mathbf a \cdot \mathbf b = 0 . At the other extreme, if they are codirectional, then the angle between them is zero with \cos 0 = 1 and \mathbf a \cdot \mathbf b = \left\| \mathbf a \right\| \, \left\| \mathbf b \right\| . This implies that the dot product of a vector \mathbf{a} with itself is \mathbf a \cdot \mathbf a = \left\| \mathbf a \right\| ^2 , which gives \left\| \mathbf a \right\| = \sqrt{\mathbf a \cdot \mathbf a} , the formula for the Euclidean length of the vector.
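The geometric definition can be inverted to recover the angle from the coordinates. A Python sketch (helper names are ours; the cosine is clamped to [-1, 1] to guard against rounding error before calling `acos`):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle from the geometric definition a . b = |a| |b| cos(theta)."""
    c = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp to guard rounding error

print(angle([1, 0], [0, 2]))   # pi/2: orthogonal vectors have dot product 0
print(angle([1, 1], [2, 2]))   # ~0: codirectional vectors
```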


Scalar projection and first properties
The scalar projection (or scalar component) of a Euclidean vector \mathbf{a} in the direction of a Euclidean vector \mathbf{b} is given by a_b = \left\| \mathbf a \right\| \cos \theta , where \theta is the angle between \mathbf{a} and \mathbf{b}.

In terms of the geometric definition of the dot product, this can be rewritten as a_b = \mathbf a \cdot \widehat{\mathbf b} , where \widehat{\mathbf b} = \mathbf b / \left\| \mathbf b \right\| is the unit vector in the direction of \mathbf{b}.

The dot product is thus characterized geometrically by \mathbf a \cdot \mathbf b = a_b \left\| \mathbf{b} \right\| = b_a \left\| \mathbf{a} \right\| . The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar \alpha, ( \alpha \mathbf{a} ) \cdot \mathbf b = \alpha ( \mathbf a \cdot \mathbf b ) = \mathbf a \cdot ( \alpha \mathbf b ) . It also satisfies the distributive law, meaning that \mathbf a \cdot ( \mathbf b + \mathbf c ) = \mathbf a \cdot \mathbf b + \mathbf a \cdot \mathbf c .
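The scalar projection and the homogeneity property can be checked numerically. A minimal sketch (the helper `scalar_projection` is an illustrative name):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    """Scalar component of a in the direction of b: a . b-hat."""
    nb = math.sqrt(dot(b, b))
    return dot(a, [x / nb for x in b])

a, b = [3, 4], [1, 0]
print(scalar_projection(a, b))   # 3.0, the component of a along the x-axis

# Homogeneity in the first argument: (2a) . b == 2 (a . b)
print(dot([2 * x for x in a], b) == 2 * dot(a, b))   # True
```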

These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that \mathbf a \cdot \mathbf a is never negative, and is zero if and only if \mathbf a = \mathbf 0 , the zero vector.


Equivalence of the definitions
If \mathbf{e}_1,\cdots,\mathbf{e}_n are the standard basis vectors in \mathbf{R}^n, then we may write \begin{align} \mathbf a &= [a_1, \dots, a_n] = \sum_i a_i \mathbf e_i \\ \mathbf b &= [b_1, \dots, b_n] = \sum_i b_i \mathbf e_i. \end{align} The vectors \mathbf{e}_i are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, \mathbf e_i \cdot \mathbf e_i = 1 , and since they form right angles with each other, if i\neq j, \mathbf e_i \cdot \mathbf e_j = 0 . Thus in general, we can say that: \mathbf e_i \cdot \mathbf e_j = \delta_ {ij} , where \delta_{ij} is the Kronecker delta.

Also, by the geometric definition, for any vector \mathbf{e}_i and a vector \mathbf{a}, we note that \mathbf a \cdot \mathbf e_i = \left\| \mathbf a \right\| \left\| \mathbf e_i \right\| \cos \theta_i = \left\| \mathbf a \right\| \cos \theta_i = a_i , where a_i is the component of vector \mathbf{a} in the direction of \mathbf{e}_i. The last step in the equality can be seen from the figure.

Now applying the distributivity of the geometric version of the dot product gives \mathbf a \cdot \mathbf b = \mathbf a \cdot \sum_i b_i \mathbf e_i = \sum_i b_i ( \mathbf a \cdot \mathbf e_i ) = \sum_i b_i a_i= \sum_i a_i b_i , which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.


Properties
The dot product fulfills the following properties if \mathbf{a}, \mathbf{b}, \mathbf{c} and \mathbf{d} are real vectors and \alpha, \beta, \gamma and \delta are scalars.

Commutative: \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} , which follows from the definition (\theta is the angle between \mathbf{a} and \mathbf{b}):
\theta = \operatorname{arccos}\left( \frac{\mathbf{a}\cdot\mathbf{b}}{\left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|} \right).
Bilinear (additive, distributive and scalar-multiplicative in both arguments):
\begin{align} (\alpha \mathbf{a} + \beta\mathbf{b})&\cdot (\gamma\mathbf{c}+\delta\mathbf{d}) \\ &=\alpha\gamma(\mathbf{a}\cdot\mathbf{c}) + \alpha\delta(\mathbf{a}\cdot\mathbf{d}) +\beta\gamma(\mathbf{b}\cdot\mathbf{c}) +\beta\delta(\mathbf{b}\cdot\mathbf{d}) . \end{align}
Orthogonal: Two non-zero vectors \mathbf{a} and \mathbf{b} are orthogonal if and only if \mathbf{a} \cdot \mathbf{b} = 0.
No cancellation: Unlike multiplication of ordinary numbers, where if ab=ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law: if \mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{c} and \mathbf{a}\neq\mathbf{0}, then we can write \mathbf{a}\cdot(\mathbf{b}-\mathbf{c}) = 0 by the distributive law; the result above says this just means that \mathbf{a} is perpendicular to (\mathbf{b}-\mathbf{c}), which still allows (\mathbf{b}-\mathbf{c})\neq\mathbf{0}, and therefore allows \mathbf{b}\neq\mathbf{c}.
Product rule: If \mathbf{a} and \mathbf{b} are vector-valued differentiable functions, then the derivative (denoted by a prime {}') of \mathbf{a}\cdot\mathbf{b} is given by the rule (\mathbf{a}\cdot\mathbf{b})' = \mathbf{a}'\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{b}'.
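The failure of cancellation can be demonstrated numerically. In this sketch the vectors are chosen so that \mathbf{a} is perpendicular to \mathbf{b}-\mathbf{c}:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 2, 3]
b = [4, 0, -1]
c = [1, 0, 0]   # b - c = [3, 0, -1] is perpendicular to a

print(dot(a, b) == dot(b, a))   # True: the dot product is commutative
print(dot(a, b), dot(a, c))     # 1 1 -- equal dot products, yet b != c
```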


Application to the law of cosines
Given two vectors \mathbf{a} and \mathbf{b} separated by angle \theta (see the upper image), they form a triangle with a third side \mathbf{c} = \mathbf{a} - \mathbf{b}. Let a, b and c denote the lengths of \mathbf{a}, \mathbf{b}, and \mathbf{c}, respectively. The dot product of \mathbf{c} with itself is: \begin{align} \mathbf{c} \cdot \mathbf{c} & = ( \mathbf{a} - \mathbf{b}) \cdot ( \mathbf{a} - \mathbf{b} ) \\
& = \mathbf{a} \cdot \mathbf{a} - \mathbf{a} \cdot \mathbf{b} - \mathbf{b} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} \\
& = a^2 - \mathbf{a} \cdot \mathbf{b} - \mathbf{a} \cdot \mathbf{b} + b^2 \\
& = a^2 - 2 \mathbf{a} \cdot \mathbf{b} + b^2 \\
c^2 & = a^2 + b^2 - 2 a b \cos \theta \end{align} which is the law of cosines.
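The derivation can be verified numerically with concrete vectors (a sketch; the vectors are arbitrary examples):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a = [3.0, 0.0]
b = [1.0, 2.0]
c = [x - y for x, y in zip(a, b)]   # third side c = a - b

theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
lhs = norm(c) ** 2
rhs = norm(a) ** 2 + norm(b) ** 2 - 2 * norm(a) * norm(b) * math.cos(theta)
print(abs(lhs - rhs) < 1e-9)   # True: the law of cosines holds
```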


Triple product
There are two ternary operations involving the dot product and the cross product.

The scalar triple product of three vectors is defined as \mathbf{a} \cdot ( \mathbf{b} \times \mathbf{c} ) = \mathbf{b} \cdot ( \mathbf{c} \times \mathbf{a} )=\mathbf{c} \cdot ( \mathbf{a} \times \mathbf{b} ). Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and corresponds to the three-dimensional special case of the exterior product of three vectors.

The vector triple product is defined by \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) = ( \mathbf{a} \cdot \mathbf{c} )\, \mathbf{b} - ( \mathbf{a} \cdot \mathbf{b} )\, \mathbf{c} . This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics.
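Both triple-product identities can be checked with integer vectors, where the arithmetic is exact. A sketch (the vectors are arbitrary examples):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product of two 3-dimensional vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]

# Scalar triple product is invariant under cyclic permutation:
print(dot(a, cross(b, c)) == dot(b, cross(c, a)) == dot(c, cross(a, b)))  # True

# Vector triple product: a x (b x c) == (a.c) b - (a.b) c
lhs = cross(a, cross(b, c))
rhs = [dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c)]
print(lhs == rhs)   # True
```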


Physics
In physics, the dot product takes two vectors and returns a scalar quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| \, |\mathbf{b}| \cos \theta . Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector.

For example:
  • Mechanical work is the dot product of the force and displacement vectors.
  • Power is the dot product of force and velocity.
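The work example can be sketched in Python (the force and displacement values are made-up illustrations):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Work done by a constant force F along a straight displacement d: W = F . d
F = [10.0, 0.0, 5.0]   # force in newtons (hypothetical values)
d = [2.0, 1.0, 0.0]    # displacement in metres (hypothetical values)
W = dot(F, d)
print(W)               # 20.0 joules
```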


Generalizations

Complex vectors
For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector \mathbf{a} = (1, i)). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition
\mathbf{a} \cdot \mathbf{b} = \sum_i a_i \overline{b_i} , where \overline{b_i} is the complex conjugate of b_i. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H: \mathbf{a} \cdot \mathbf{b} = \mathbf{b}^\mathsf{H} \mathbf{a} .

In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear: it is linear in one argument and conjugate linear in the other. The dot product is not symmetric, since \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}} . The angle between two complex vectors is then given by \cos \theta = \frac{\operatorname{Re} ( \mathbf{a} \cdot \mathbf{b} )}{ \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| } .
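Python's built-in complex type makes these properties easy to verify. A sketch (the helper name `cdot` is ours):

```python
def cdot(a, b):
    """Complex dot product: sum of a_i times the conjugate of b_i."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 1j, 2j]
print(cdot(a, a))   # (6+0j): real and non-negative, unlike sum(x*x)

b = [1j, 1.0]
# Conjugate symmetry: a . b == conjugate(b . a)
print(cdot(a, b) == cdot(b, a).conjugate())   # True
```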

The complex dot product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics.

The self dot product of a complex vector, \mathbf{a} \cdot \mathbf{a} = \mathbf{a}^\mathsf{H} \mathbf{a} , involving the conjugate transpose of a row vector, is also known as the norm squared, \mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|^2, after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: Squared Euclidean distance).


Inner product
The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers \R or the field of complex numbers \Complex . It is usually denoted using angular brackets by \left\langle \mathbf{a} \, , \mathbf{b} \right\rangle .

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.


Functions
The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain \{k\in\mathbb{N}:1\leq k \leq n\}, and u_i is a notation for the image of i by the function/vector u.

This notion can be generalized to square-integrable functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some measure space (X, \mathcal{A}, \mu): \left\langle u , v \right\rangle = \int_X u v \, \text{d} \mu.

For example, if f and g are continuous functions over a compact subset K of \mathbb{R}^n with the standard Lebesgue measure, the above definition becomes: \left\langle f , g \right\rangle = \int_K f(\mathbf{x}) g(\mathbf{x}) \, \operatorname{d}^n \mathbf{x} .
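This function inner product can be approximated numerically. A sketch using a simple midpoint rule (the helper name `inner` and the discretization are our choices, not part of the definition):

```python
import math

def inner(f, g, a, b, n=10000):
    """Midpoint-rule approximation of the L2 inner product of f and g on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# <sin, cos> over [0, pi] is 0 by symmetry; <sin, sin> is pi/2
print(round(inner(math.sin, math.cos, 0.0, math.pi), 6))   # ~0.0
print(round(inner(math.sin, math.sin, 0.0, math.pi), 6))   # ~1.570796
```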

Generalized further to \psi and \chi, by analogy with the complex inner product above, gives: \left\langle \psi, \chi \right\rangle = \int_K \psi(z) \overline{\chi(z)} \, \text{d} z.


Weight function
Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions u(x) and v(x) with respect to the weight function r(x)>0 is \left\langle u , v \right\rangle_r = \int_a^b r(x) u(x) v(x) \, d x.
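A weighted inner product can be approximated the same way. A sketch with the midpoint rule and weight r(x) = x (helper name and discretization are our choices):

```python
def weighted_inner(u, v, r, a, b, n=10000):
    """Midpoint-rule approximation of <u, v>_r = integral of r(x) u(x) v(x) dx."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        total += r(x) * u(x) * v(x)
    return total * h

# With weight r(x) = x on [0, 1]: <1, 1>_r = integral of x dx = 1/2
print(round(weighted_inner(lambda x: 1.0, lambda x: 1.0, lambda x: x, 0.0, 1.0), 6))
```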


Dyadics and matrices
A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices \mathbf{A} and \mathbf{B} of the same size: \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} \overline{B_{ij}} = \operatorname{tr} ( \mathbf{B}^\mathsf{H} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{H} ) . And for real matrices, \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr} ( \mathbf{B}^\mathsf{T} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{T} ) = \operatorname{tr} ( \mathbf{A}^\mathsf{T} \mathbf{B} ) = \operatorname{tr} ( \mathbf{B} \mathbf{A}^\mathsf{T} ) .
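For real matrices the Frobenius inner product is just an entrywise sum of products. A minimal sketch (the helper name `frobenius` is ours):

```python
def frobenius(A, B):
    """Frobenius inner product of two real matrices of the same shape."""
    return sum(A[i][j] * B[i][j]
               for i in range(len(A))
               for j in range(len(A[0])))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(frobenius(A, B))   # 5 + 12 + 21 + 32 = 70
```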

Writing a matrix as a dyadic, we can define a different double-dot product; however, it is not an inner product.


Tensors
The inner product between a tensor of order n and a tensor of order m is a tensor of order n+m-2; see Tensor contraction for details.


Computation

Algorithms
The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
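Compensated summation carries a running correction term alongside the accumulator. A sketch of a Kahan-style dot product (helper name is ours; this reduces accumulated rounding error relative to naive summation, though it does not eliminate every cancellation case):

```python
def kahan_dot(a, b):
    """Dot product with Kahan compensated summation."""
    total = 0.0
    comp = 0.0   # running compensation for lost low-order bits
    for x, y in zip(a, b):
        term = x * y - comp
        t = total + term
        comp = (t - total) - term   # what was lost when adding term
        total = t
    return total

print(kahan_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```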


Libraries
A dot product function is included in:
  • BLAS level 1 real SDOT, DDOT; complex CDOTU, ZDOTU, CDOTC, ZDOTC
  • Fortran as dot_product(A, B) or sum(conjg(A) * B)
  • Julia as A' * B or standard library LinearAlgebra as dot(A, B)
  • R (programming language) as sum(A * B) for vectors or, more generally for matrices, as A %*% B
  • Matlab as A' * B or conj(dot(A, B)) or sum(conj(A) .* B) or dot(A, B)
  • Python (package NumPy) as np.matmul(A, B) or np.dot(A, B) or np.inner(A, B)
  • GNU Octave as sum(conj(X) .* Y, dim), and similar code as Matlab
  • Intel oneAPI Math Kernel Library real p?dot ; complex p?dotc


